6 research outputs found

    Efficient multi-task based facial landmark and gesture detection in monocular images

    [EN] Communication between people comprises several channels for exchanging information. Non-verbal communication carries valuable information about the context of a conversation and is a key element for understanding the whole interaction. Facial expressions are a representative example of this kind of non-verbal communication and a valuable element for improving human-machine interaction interfaces. Using images captured by a monocular camera, automatic facial analysis systems can extract facial expressions to improve human-machine interactions. However, several technical factors must be considered, including possible computational limitations (e.g. autonomous robots) or data throughput constraints (e.g. a centralized computation server). Taking these limitations into account, this work presents an efficient method that simultaneously detects a set of 68 facial feature points and a set of key facial gestures. The output of this method provides valuable information for understanding the context of communication and improving the response of automatic human-machine interaction systems.
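The multi-task idea described above, predicting 68 landmarks and several gestures in a single forward pass from shared features, can be sketched as follows. This is a minimal numpy illustration under assumed dimensions (the feature size, gesture count, and head weights are hypothetical, not the paper's actual architecture):

```python
import numpy as np

# Hypothetical multi-task head: one shared feature vector feeds two
# output branches, so landmarks and gestures are predicted in one pass.
N_LANDMARKS = 68   # (x, y) facial feature points, as in the abstract
N_GESTURES = 5     # number of key facial gestures (illustrative value)
FEAT_DIM = 128     # shared backbone feature size (illustrative value)

rng = np.random.default_rng(0)
W_landmarks = rng.standard_normal((FEAT_DIM, N_LANDMARKS * 2)) * 0.01
W_gestures = rng.standard_normal((FEAT_DIM, N_GESTURES)) * 0.01

def multi_task_head(features: np.ndarray):
    """Map shared backbone features to both task outputs."""
    landmarks = (features @ W_landmarks).reshape(-1, N_LANDMARKS, 2)
    gesture_logits = features @ W_gestures
    # Softmax over gesture classes
    e = np.exp(gesture_logits - gesture_logits.max(axis=1, keepdims=True))
    gesture_probs = e / e.sum(axis=1, keepdims=True)
    return landmarks, gesture_probs

feats = rng.standard_normal((4, FEAT_DIM))  # features of 4 face crops
lm, gp = multi_task_head(feats)
print(lm.shape, gp.shape)  # (4, 68, 2) (4, 5)
```

Sharing the backbone is what makes the method efficient: the costly feature extraction is paid once, while each extra task adds only a small output head.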

    On-demand serverless video surveillance with optimal deployment of deep neural networks

    [EN] We present an approach to optimally deploy Deep Neural Networks (DNNs) in serverless cloud architectures. A serverless architecture allows running code in response to events, automatically managing the required computing resources. However, these resources have limitations in terms of execution environment (CPU only), cold starts, space, scalability, etc. These limitations hinder the deployment of DNNs, especially considering that fees are charged according to the resources employed and the computation time. Our deployment approach comprises multiple decoupled software layers that allow effectively managing multiple processes, such as business logic, data access, and computer vision algorithms that leverage DNN optimization techniques. Experimental results in AWS Lambda reveal its potential to build cost-effective on-demand serverless video surveillance systems. This work has been partially supported by the program ELKARTEK 2019 of the Basque Government under project AUTOLIB.
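The decoupled-layers design can be sketched with a handler in the style of an AWS Lambda entry point. All function names and the toy "model" below are illustrative assumptions, not the paper's implementation; the one real pattern shown is loading the model at module scope so warm invocations skip the cold-start cost:

```python
import json

# Hypothetical decoupled layers for a serverless vision function:
# the handler only parses events, the business-logic layer validates
# requests, and the model layer wraps (here, fakes) DNN inference.

def load_optimized_model():
    # Placeholder for a CPU-optimized DNN (e.g. a quantized model);
    # here it just counts candidate regions in the payload.
    return lambda frame_meta: {"detections": len(frame_meta.get("regions", []))}

MODEL = load_optimized_model()  # loaded once at cold start, reused afterwards

def analyze_frame(frame_meta: dict) -> dict:
    """Business-logic layer: validate input, then call the vision layer."""
    if "regions" not in frame_meta:
        return {"error": "missing regions"}
    return MODEL(frame_meta)

def handler(event: dict, context=None) -> dict:
    """Entry point in the style of an AWS Lambda handler."""
    frame_meta = json.loads(event["body"])
    result = analyze_frame(frame_meta)
    return {"statusCode": 200, "body": json.dumps(result)}

resp = handler({"body": json.dumps({"regions": [[0, 0, 10, 10]]})})
print(resp["body"])  # {"detections": 1}
```

Keeping the layers decoupled means the inference code can be optimized or swapped (e.g. for a different runtime) without touching the event-handling or data-access logic, which matters when fees track compute time.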

    Building synthetic simulated environments for configuring and training multi-camera systems for surveillance applications

    [EN] Synthetic simulated environments are gaining popularity in the Deep Learning era, as they can alleviate the effort and cost of two critical tasks in building multi-camera systems for surveillance applications: setting up the camera system to cover the use cases, and generating the labeled dataset to train the required Deep Neural Networks (DNNs). However, no simulated environments are ready to solve these tasks for all kinds of scenarios and use cases. Typically, ‘ad hoc’ environments are built, which cannot easily be applied to other contexts. In this work we present a methodology to build synthetic simulated environments that are general enough to be usable in different contexts with little effort. Our methodology tackles the challenges of appropriately parameterizing scene configurations, of randomly generating a wide and balanced range of situations of interest for training DNNs with synthetic data, and of capturing images quickly from virtual cameras given rendering bottlenecks. We show a practical implementation example for the detection of incorrectly placed luggage in aircraft cabins, including a qualitative and quantitative analysis of the data generation process and its influence on DNN training, as well as the modifications required to adapt it to other surveillance contexts. This work has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 865162, SmaCS (https://www.smacs.eu/).
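The parameterization-and-random-generation step can be sketched as below: scene parameters are declared as ranges or categories, and configurations are sampled until the class of interest is balanced. The parameter names and ranges are illustrative assumptions for the aircraft-cabin example, not the paper's actual scene schema:

```python
import random

# Hypothetical scene parameter space for a synthetic cabin environment.
# Each configuration samples luggage placement, lighting and camera pose,
# so a wide and balanced set of situations can be generated for training.
PARAM_SPACE = {
    "luggage_state": ["stowed", "on_seat", "in_aisle"],  # class to balance
    "light_intensity": (0.2, 1.0),
    "camera_yaw_deg": (-30.0, 30.0),
}

def sample_scene(rng: random.Random) -> dict:
    return {
        "luggage_state": rng.choice(PARAM_SPACE["luggage_state"]),
        "light_intensity": rng.uniform(*PARAM_SPACE["light_intensity"]),
        "camera_yaw_deg": rng.uniform(*PARAM_SPACE["camera_yaw_deg"]),
    }

def generate_balanced(n_per_class: int, seed: int = 0) -> list:
    """Keep sampling until every luggage state appears n_per_class times."""
    rng = random.Random(seed)
    counts = {c: 0 for c in PARAM_SPACE["luggage_state"]}
    scenes = []
    while any(c < n_per_class for c in counts.values()):
        scene = sample_scene(rng)
        if counts[scene["luggage_state"]] < n_per_class:
            counts[scene["luggage_state"]] += 1
            scenes.append(scene)
    return scenes

scenes = generate_balanced(n_per_class=10)
print(len(scenes))  # 30
```

Because the scene schema is declarative, adapting the generator to another surveillance context mainly means swapping the parameter space, which is the kind of reusability the methodology targets.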

    Designing Automated Deployment Strategies of Face Recognition Solutions in Heterogeneous IoT Platforms

    In this paper, we tackle the problem of deploying face recognition (FR) solutions in heterogeneous Internet of Things (IoT) platforms. The main challenges are the optimal deployment of deep neural networks (DNNs) across a high variety of IoT devices (e.g., robots, tablets, smartphones, etc.), the secure management of biometric data while respecting users’ privacy, and the design of appropriate user interaction with facial verification mechanisms for all kinds of users. We analyze different approaches to these challenges and propose a knowledge-driven methodology for the automated deployment of DNN-based FR solutions in IoT devices, with secure management of biometric data and real-time feedback for improved interaction. We provide practical examples and experimental results with state-of-the-art DNNs for FR on Intel and NVIDIA hardware platforms as IoT devices. This work was supported by the SHAPES project, which has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement no. 857159, and in part by the Spanish Centre for the Development of Industrial Technology (CDTI) through the project ÉGIDA—RED DE EXCELENCIA EN TECNOLOGIAS DE SEGURIDAD Y PRIVACIDAD under grant CER20191012.
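A knowledge-driven deployment step like the one proposed above can be sketched as rule-based matching between model variants and device capabilities. The device capabilities, variant names, and accuracy figures below are all illustrative assumptions, not the paper's actual knowledge base:

```python
# Hypothetical catalog of DNN variants with their hardware requirements.
MODEL_VARIANTS = {
    "fp32": {"needs": set(), "accuracy": 0.990},
    "fp16_tensorrt": {"needs": {"nvidia_gpu"}, "accuracy": 0.988},
    "int8_openvino": {"needs": {"intel_vpu"}, "accuracy": 0.985},
}

def select_variant(device_caps: set) -> str:
    """Pick a variant whose requirements the device meets, preferring
    hardware-accelerated variants and breaking ties by accuracy."""
    candidates = [
        (name, spec) for name, spec in MODEL_VARIANTS.items()
        if spec["needs"] <= device_caps
    ]
    candidates.sort(key=lambda kv: (len(kv[1]["needs"]), kv[1]["accuracy"]),
                    reverse=True)
    return candidates[0][0]

print(select_variant({"nvidia_gpu"}))  # fp16_tensorrt
print(select_variant({"intel_vpu"}))   # int8_openvino
print(select_variant(set()))           # fp32
```

Encoding the deployment knowledge as data rather than code is what allows the process to be automated across a growing variety of IoT devices.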

    Optimizing deep neural network deployment for intelligent security video analytics

    200 p. Intelligent video analytics for security currently applies Artificial Intelligence (AI) algorithms that overcome the limitations of traditional video surveillance, which until now has relied merely on manual monitoring, with a very limited capacity to detect security threats in real time. Deep Neural Networks (DNNs) have proven highly effective for video analysis, but deploying them on current intelligent-system infrastructures poses a major challenge due to their high computational requirements. These networks are complex graphs of interconnected neurons with a large number of parameters and arithmetic operations, which results in processing bottlenecks. Moreover, DNN-based vision applications involve multiple DNN inference operations as well as data pre- and post-processing, which adds further processing overhead. Although numerous DNN optimization techniques have been proposed, new optimization strategies are needed for end-to-end applications built on DNN-based intelligent vision systems. To meet the computational requirements of DNNs, new AI accelerators have appeared on the market (Google's TPUs, Intel's VPUs, etc., generically referred to as "xPUs", as well as improved CPU, GPU, and FPGA architectures), designed to be energy efficient and to deliver a very considerable performance increase for DNN inference. However, efficiently deploying DNN-based applications on these devices requires significant expertise and know-how. Furthermore, the economic cost of these devices may not be justified unless they are used efficiently.
Intelligent security video analytics (ISVA) systems may be integrated into heterogeneous, decentralized, and distributed computing platforms such as cloud environments, fog environments, and offline edge devices, or in the context of the Internet of Things (IoT). These platforms require an exhaustive analysis to decide the optimal deployment strategy, given their diversity of devices, application architectures, and virtualized execution environments. Unfortunately, these virtualized environments and their associated hardware devices are not designed for DNN inference, and there is therefore no support for deploying these systems across the diverse range of AI accelerators. This thesis aims to optimize the deployment of DNN-based vision applications on intelligent-system infrastructures, taking into account the latest advances in DNN optimization techniques and customized deployment strategies for Computer Vision (CV). The research aims to improve the efficiency and effectiveness of video analytics systems, enabling an optimal deployment process for intelligent systems and IoT environments. Building on these DNN deployment techniques and strategies for CV, this thesis addresses the challenges associated with the computational requirements of DNN-based computer vision applications. To achieve these objectives, the study proposes different DNN deployment strategies for video analytics in different execution environments: (1) serverless cloud environments, to delegate large-scale, highly variable workloads; (2) heterogeneous IoT platforms; and (3) resource-constrained edge devices running computationally demanding processes.

    Guided Bus Rapid Transit

    Electrical Bus Rapid Transit (eBRT) is an electric public transport system that offers a clean, high-performance, and affordable alternative to conventional combustion and hybrid vehicles. These buses charge their battery at every bus stop in order to reach the next station. However, this charging system needs an appropriate infrastructure, called a pantograph, and it requires high-precision bus positioning to preserve battery lifetime, save energy, and limit charging time. To address this issue, Vicomtech and Datik have planned a computer-vision project to help the driver place the vehicle in the correct position. In this document, we present a fast monocular-camera driver guidance algorithm, since the embedded computers in these vehicles do not support high-computation, high-precision operations. In addition to the frequent lane signs, more accurate geometric beacons are painted on the road to provide metric information to the vision system. The method uses segmentation to binarize the image, discriminating the background. It detects, tracks, and counts the different lane mark contours and classifies each special painted mark. Moreover, it needs no calibration task to compute longitudinal and cross distances, because the lane mark sizes are known.
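The binarize-and-count idea can be sketched with a numpy-only toy example on a synthetic 1-D column profile (threshold value, dash sizes, and intensities below are illustrative assumptions, not the system's actual parameters). Knowing the real painted-mark sizes, as the abstract notes, is what would turn segment counts and lengths into metric distances without calibration:

```python
import numpy as np

def binarize(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Separate bright painted marks from dark asphalt."""
    return (gray > thresh).astype(np.uint8)

def count_marks(binary_column: np.ndarray) -> int:
    """Count runs of 1s (painted segments) along a 1-D column profile."""
    padded = np.concatenate(([0], binary_column, [0]))
    # Each 0 -> 1 transition starts a new mark.
    return int(np.sum((padded[1:] == 1) & (padded[:-1] == 0)))

# Synthetic column: dark asphalt (30) with three bright dashes (220).
column = np.full(100, 30, dtype=np.uint8)
for start in (10, 40, 70):
    column[start:start + 15] = 220

print(count_marks(binarize(column)))  # 3
```

A fixed-threshold, run-counting scheme like this is cheap enough for the limited embedded computers the abstract mentions, at the cost of sensitivity to lighting changes that a real system would need to handle.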